LIDAR is an acronym for Light Detection And Ranging, based on a laser-radar paradigm. RADAR (Radio Detection And Ranging) is the process of transmitting, receiving, detecting, and processing an electromagnetic wave that reflects from a target, a technology developed by the German Army in 1935.[1] All ranging systems function by transmitting and receiving electromagnetic energy, with the primary difference between RADAR and LIDAR being the frequency bands.[2]
The term 3D Flash LIDAR refers to the 3D point cloud and intensity data captured by a 3D Flash LIDAR imaging system. 3D Flash LIDAR enables real-time 3D imaging, capturing 3D depth and intensity data characterized by an absence of platform-motion distortion or scanning artifacts. When used on a moving vehicle or platform, blur-free imaging is expected. This is the result of using a single laser pulse to illuminate the field of view (FOV) and create an entire frame. Because the integration time is fast (e.g. at 100 meters range, capture requires about 660 nanoseconds), the ability to produce real-time 3D video streams consisting of absolute range and co-registered intensity (albedo) makes the technology a natural fit for autonomous applications.
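As a quick check on the integration time quoted above, the two-way travel time follows directly from the speed of light; a minimal Python sketch (the exact result is about 667 ns, consistent with the rounded figure above):

C = 299_792_458.0  # speed of light in vacuum, m/s

def round_trip_time_s(range_m: float) -> float:
    """Time for a laser pulse to reach a target at range_m and return."""
    return 2.0 * range_m / C

print(round_trip_time_s(100.0) * 1e9)  # ~667 ns for a 100 m target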
Because 3D Flash LIDAR systems are solid state (no moving parts), they carry less mass than alternative LIDAR cameras (i.e. scanners). Capture distances range from 5 cm to 5 km.[3]
A 3D Flash LIDAR camera's readout semiconductors enable each pixel in the focal plane array to act independently, measuring range and intensity at every point in the camera's field of view. Using an avalanche photodiode detector (APD) hybridized with a CMOS focal plane array, the 3D Flash LIDAR camera operates like a 2D digital camera with "smart 3D pixels" in an array, each recording the time the camera's laser pulse requires to travel to and from the objects illuminated in the scene.
The image capture process using 3D Flash LIDAR is as follows:
1. A short-duration laser pulse illuminates the area in front of the camera. This illumination is "back-scattered" to the receiver by the objects in front of the camera (the scene).
2. The laser's aperture uses a diffuser to shape and convert the laser output to a square "top hat" pattern, increasing the efficiency and uniformity of the illumination while matching the field of view. This creates a number of opportunities to enhance specific illumination attributes, resulting in better imaging of challenging objects such as wires at long range.
3. The photonic energy back-scatter is collected by an optical lens and focused onto the hybrid focal plane array of 3D pixels.
4. In one use model, the solid-state APD detector pixels produce an avalanche of photo-electrons from the incoming photons in each pixel. The gain of the APD detector determines how many electrons are produced per photon.[4]
5. A second use model substitutes PIN diodes for the APD detector, using the same laser wavelengths as the APDs.
6. A third model provides for using CMOS detectors, but these involve using lasers in the visible spectrum for illumination and have limited range.
In all use cases, the 3D focal plane:
A. images the scene using a lens to focus the returning laser energy,
B. has independent triggers and counters to record the time-of-flight of the laser pulse to and from the objects,
C. records a time sample of the returned pulse, and
D. records the intensity (albedo) of the reflection.
Each pixel in the detector array captures independently and is connected to an amplifying and thresholding circuit on the CMOS read-out IC (ROIC). A 'counter' measures the time-of-flight (TOF) of the reflected light imaged on the APD/CMOS hybrid array. The range to every object and surface in the scene is computed along with the intensity of the reflected light, and both are used to produce 3D video output at various frame rates (e.g. 1–60 Hz).
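A minimal sketch of this per-pixel counter-to-range conversion, assuming a hypothetical 1 ns counter tick and a 128 × 128 array (actual ROIC clock rates and data formats differ):

import numpy as np

C = 299_792_458.0  # speed of light, m/s

def counts_to_range_m(tof_counts: np.ndarray, tick_s: float) -> np.ndarray:
    """Convert per-pixel TOF counter readings into a range map (metres).

    tof_counts -- (H, W) array of ticks between the laser firing and each
                  pixel's threshold trigger
    tick_s     -- duration of one counter tick in seconds (hypothetical)
    """
    return C * (tof_counts * tick_s) / 2.0  # halve for the out-and-back path

# Illustrative frame: counter values for targets roughly 15-100 m away
counts = np.random.randint(100, 670, size=(128, 128))
range_map = counts_to_range_m(counts, tick_s=1e-9)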
3D Flash LIDAR sensor engines are able to capture laser wavelengths ranging from 500 nm to 1700 nm, depending on which semiconductor detector material is used. The prevalent use model favors InGaAs because of the efficiencies achieved at 1.06 to 1.7 micron laser wavelengths. Because 3D Flash LIDAR focal planes combine dissimilar materials (e.g. InGaAs and CMOS), they are created by hybridizing the detector and read-out semiconductors using indium "bumps" for the inter-chip connections, resulting in the focal plane array.
The typical laser wavelength for 3D Flash LIDAR cameras used in proximity to humans is the eye-safe wavelength of 1.57 microns, preferred because it is blocked by the cornea, preserving the integrity of the retina.
Objects in the field of view at the same range may return differing numbers of photons depending on their reflective characteristics, resulting in distinct differences in intensity values (the 2D image). This intensity difference is useful when imaging diverse environments such as city streets with distinctive lane markings and street signs (e.g. road surface vs. markings).
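As an illustration of how this intensity channel might be used, a minimal sketch that separates bright retroreflective returns (e.g. lane paint) from the darker road surface with a simple threshold; the threshold value here is purely hypothetical and scene-dependent:

import numpy as np

def lane_marking_mask(intensity: np.ndarray, threshold: float = 0.6) -> np.ndarray:
    """Boolean mask of high-albedo returns against a darker background.

    intensity -- (H, W) co-registered intensity frame, normalized to 0..1
    threshold -- hypothetical cut-off; a real system would adapt it per scene
    """
    return intensity > threshold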
Because the velocity of light is a universal constant, accurate range data is a direct and simple calculation as opposed to non-time-of-flight imaging systems whose range is interpolated.
A simpler and more elegant "time-of-flight" 3D camera was invented in the early 1990s. This 3D camera does not require counters to measure the time-of-flight of the illuminating pulse, but instead measures this time indirectly by synchronously gating the received pulse and detecting the received photons in an ordinary image sensor. Homodyne detection is accomplished this way, with the number of photons collected being proportional to distance.[5] Most practical time-of-flight cameras measure the photons of the received pulse after a single synchronous gating with an electronic shutter.[6][7]
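One common two-window scheme for such gated (indirect) time-of-flight measurement is sketched below; this is a generic formulation, not necessarily the exact method of the cited cameras. The later the echo arrives, the more of its charge spills from the first shutter window into the second, so the charge ratio encodes distance:

C = 299_792_458.0  # speed of light, m/s

def gated_tof_distance_m(q1: float, q2: float, pulse_s: float) -> float:
    """Distance from two gated photon (charge) measurements.

    q1 -- charge collected in a shutter window synchronous with the pulse
    q2 -- charge collected in the immediately following window
    """
    return (C * pulse_s / 2.0) * q2 / (q1 + q2)

# Example: a 30 ns pulse with equal charge in both windows places the
# target halfway through the unambiguous range (~2.25 m here).
print(gated_tof_distance_m(500.0, 500.0, 30e-9))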
3D Flash LIDAR video cameras provide accurate 3D representations (models) of the scene, including measurements and real-time imaging through obscurants such as dust, clouds, or smoke. The "framing camera" nature of 3D Flash LIDAR cameras and their real-time 3D video output make them well suited to fast-moving-vehicle applications such as automotive or aviation.
Supporting various data capture modes, the 3D data streamed from a 3D FLVC is presented in three forms: RAW data, Range & Intensity (R&I) data, or SULAR data. RAW data, as the name suggests, is the raw time-of-flight and intensity data for each pixel in each frame, including 20 or more pulse-shape samples per laser pulse; the RAW mode allows various algorithms to be applied to the data post-capture. The R&I mode assumes RAW data is processed by on-camera algorithms in real time, and represents an entire frame of x, y, depth (z) and intensity (i) data for all the pixels in the frame.
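A minimal sketch of the RAW-to-R&I reduction under an assumed layout (an (H, W, S) array of S ≥ 20 time samples per pixel); ASC's actual RAW format and on-camera algorithms are not public, and a real system would fit the pulse shape rather than simply pick the peak sample:

import numpy as np

C = 299_792_458.0  # speed of light, m/s

def raw_to_ri(samples: np.ndarray, sample_period_s: float):
    """Reduce RAW pulse-shape samples to per-pixel Range & Intensity.

    samples -- (H, W, S) time samples of the returned pulse per pixel
    Returns (range_m, intensity), each of shape (H, W).
    """
    peak_idx = samples.argmax(axis=2)   # coarse echo-arrival sample
    intensity = samples.max(axis=2)     # peak amplitude ~ albedo
    tof_s = peak_idx * sample_period_s
    return C * tof_s / 2.0, intensity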
SULAR (an acronym derived from Staring Underwater Laser Radar) data is captured using a gated mode intended for scenes where individual pixels would otherwise be triggered by obscurants such as dust or smoke. In this mode the initial, hard-target trigger is suppressed to avoid imaging just the outer edge of the obscurant, pulse sampling occurs in all pixels simultaneously at specified increments, and a sequence of variable-sized range gates can be applied at predetermined depths. Within the camera's field of view, the gated volume can automatically be moved deeper into the obscurant with each successive laser pulse.
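A sketch of how such a gate schedule might be expressed, stepping a fixed-length gate deeper into the obscurant on each successive pulse (the names and parameters are illustrative, not ASC's interface):

def gate_schedule(start_m: float, step_m: float, gate_len_m: float, n_pulses: int):
    """(near, far) range-gate bounds in metres for each successive pulse."""
    return [(start_m + i * step_m, start_m + i * step_m + gate_len_m)
            for i in range(n_pulses)]

# Example: begin 10 m in, advance 2 m per pulse, 5 m gates, 8 pulses
for near, far in gate_schedule(10.0, 2.0, 5.0, 8):
    print(f"gate: {near:.0f}-{far:.0f} m")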
The SULAR range gating is effective because it captures only the light within the gate rather than integrating all the light reflected by the obscurant's surface. In this way the noise is greatly reduced and the signal-to-noise ratio is raised to the level of detection. Some photons get through the obscurant, reflect from the targets within it, and pass back out through the obscurant to be collected by the receive aperture and focused on the focal plane array.
A diffuse attenuation length characterizes this process: the number of photons transported through one attenuation length is about 1/3 of those that entered the attenuating or scattering medium. Objects can typically be 3D imaged at 3 to 5 attenuation lengths.
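Assuming simple exponential (Beer-Lambert) attenuation, where one attenuation length transmits a fraction 1/e ≈ 0.37 (roughly the 1/3 quoted above), the surviving signal for a two-way trip falls off quickly; a short worked check:

import math

def two_way_surviving_fraction(n_lengths: float) -> float:
    """Fraction of photons surviving out-and-back through n attenuation
    lengths, assuming pure exponential attenuation."""
    return math.exp(-2.0 * n_lengths)

for n in (3, 4, 5):
    print(n, two_way_surviving_fraction(n))  # ~2.5e-3, ~3.4e-4, ~4.5e-5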
3D Flash LIDAR camera systems are available from Advanced Scientific Concepts, Inc. (ASC) and Raytheon Vision Systems (RVS), both based in Santa Barbara, California. ASC has shipped various iterations of its products for space (STS-127, STS-133, the Space Exploration Technologies (SpaceX) Dragon vehicle), unmanned air and ground vehicles, and surveillance, and remains the key contributor to the technology's development; RVS has limited its activity to the NASA Sensor Test for Orion RelNav Risk Mitigation (STORRM) Development Test Objective (DTO) tested on STS-134. Little public information is available about the RVS solution.
ASC 3D Flash LIDAR cameras currently have the equivalent of 16,384 range finders on each sensor chip (a 128 × 128 array), allowing the sensor to act as a 3D video camera with functionality well beyond simple range finding. Capable of mapping roughly half a million points per second, with on-camera processing generating 128 × 128 range maps at 30 Hz, ASC has demonstrated single-pulse 3D Flash LIDAR imagery with a sensor covering physical ranges from centimeters to kilometers.
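The quoted throughput follows directly from the array size and frame rate:

pixels_per_frame = 128 * 128        # 16,384 simultaneous range measurements
frame_rate_hz = 30
points_per_second = pixels_per_frame * frame_rate_hz
print(points_per_second)            # 491,520, i.e. roughly half a million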
NASA Langley Research Center has published papers suggesting enhancements to 3D Flash LIDAR data models applicable to all 3D Flash LIDAR products. Analysis of processed Flash LIDAR data indicates that an 8× resolution ("super-resolution") enhancement is feasible. Study of the processed data also shows a reduction in random noise as multiple image frames are blended to create a single high-resolution DEM (digital elevation model).[8]
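The cited work involves sub-pixel registration onto a finer grid; as a sketch of just the noise-reduction aspect, averaging N already-registered frames suppresses uncorrelated noise by roughly the square root of N:

import numpy as np

def blend_frames(frames: np.ndarray) -> np.ndarray:
    """Average a stack of co-registered range frames.

    frames -- (N, H, W) stack; uncorrelated per-pixel noise in the
              result drops by ~sqrt(N) relative to a single frame.
    """
    return frames.mean(axis=0)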
Characteristics of 3D Flash LIDAR cameras include:
• Full-frame time-of-flight data collected with a single laser pulse
• Unambiguous direct calculation of absolute range
• Full frame rates with area array technology
• Blur-free images without motion distortion
• Co-registration of range and intensity for each pixel
• Pixels are perfectly registered within a frame
• Ability to represent objects that are oblique to the camera
• Non-mechanical (no need for precision scanning mechanisms)
• Calibration done at manufacturing time
• Smaller and lighter than point scanning systems
• Low power consumption
• Ability to “see” into obscuration (range-gating)
• Eye-Safe laser assembly
• Combine 3D Flash LIDAR with 2D cameras (EO and IR) for 2D texture over 3D depth
• Possible to combine multiple 3D Flash LIDAR cameras for a full volumetric 3D scene
Application areas include:
• Automotive
• Aviation
• Defense
• Marine
• Robotics
• Industrial
• Space
• Surveillance
• Topographical Mapping
• Transportation